5 research outputs found
3D Active Metric-Semantic SLAM
In this letter, we address the problem of exploration and metric-semantic
mapping of multi-floor GPS-denied indoor environments using Size, Weight, and
Power (SWaP) constrained aerial robots. Most previous work in exploration
assumes that robot localization is solved. However, neglecting the state
uncertainty of the agent can ultimately lead to cascading errors both in the
resulting map and in the state of the agent itself. Furthermore, actions that
reduce localization errors may be at direct odds with the exploration task. We
propose a framework that balances the efficiency of exploration with actions
that reduce the state uncertainty of the agent. In particular, our algorithmic
approach for active metric-semantic SLAM is built upon sparse information
abstracted from raw problem data, to make it suitable for SWaP-constrained
robots. Furthermore, we integrate this framework within a fully autonomous
aerial robotic system that achieves autonomous exploration in cluttered, 3D
environments. In extensive real-world experiments, we show that including
Semantic Loop Closure (SLC) reduces robot pose estimation errors by over 90% in
translation and approximately 75% in yaw, and reduces the uncertainties in pose
estimates and semantic maps by over 70% and 65%,
respectively. Although discussed in the context of indoor multi-floor
exploration, our system can be used for various other applications, such as
infrastructure inspection and precision agriculture, where reliable GPS data
may not be available.
Comment: Submitted to RA-L for review
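The trade-off described above, balancing exploration efficiency against actions that reduce state uncertainty, can be sketched as a simple scoring rule. The function names, the linear penalty form, and the covariance-trace proxy below are illustrative assumptions, not the paper's actual formulation:

```python
def exploration_utility(info_gain, pose_covariance_trace, weight=0.5):
    """Score a candidate action: reward expected map information gain,
    penalize predicted growth in pose uncertainty (covariance trace).
    The linear trade-off and weight are illustrative, not from the paper."""
    return info_gain - weight * pose_covariance_trace

def select_action(candidates, weight=0.5):
    """Pick the (info_gain, cov_trace) candidate with the highest utility.
    With a large weight, a slightly less informative but better-localized
    action (e.g., one closing a semantic loop) wins."""
    return max(candidates, key=lambda c: exploration_utility(c[0], c[1], weight))
```

For example, with `weight=2` a candidate offering gain 8 at covariance trace 0.5 outscores one offering gain 10 at trace 2, capturing how uncertainty-reducing actions can dominate purely exploratory ones.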
SEER: Safe Efficient Exploration for Aerial Robots using Learning to Predict Information Gain
We address the problem of efficient 3-D exploration in indoor environments
for micro aerial vehicles with limited sensing capabilities and payload/power
constraints. We develop an indoor exploration framework that uses learning to
predict the occupancy of unseen areas, extracts semantic features, samples
viewpoints to predict information gains for different exploration goals, and
plans informative trajectories to enable safe and smart exploration. Extensive
experimentation in simulated and real-world environments shows that the
proposed approach outperforms the state-of-the-art exploration framework by 24%
in terms of total path length in a structured indoor environment, with a higher
success rate during exploration.
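A common way to quantify the information gain of a sampled viewpoint is the Shannon entropy of the occupancy probabilities the viewpoint is expected to observe; cells near 0.5 (unknown) contribute the most. This is a minimal sketch of that idea, not SEER's learned predictor:

```python
import math

def cell_entropy(p):
    """Shannon entropy (bits) of a single occupancy probability."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # fully known cells carry no remaining information
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def predicted_information_gain(predicted_occupancy):
    """Sum per-cell entropy over the cells a viewpoint is predicted to see.
    A learned occupancy predictor would supply `predicted_occupancy`; here
    it is just a list of probabilities."""
    return sum(cell_entropy(p) for p in predicted_occupancy)
```

A viewpoint covering many uncertain cells (probabilities near 0.5) would score higher than one re-observing already-mapped space, which is the ranking signal an informative planner needs.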
Learning to Explore Indoor Environments using Autonomous Micro Aerial Vehicles
In this paper, we address the challenge of exploring unknown indoor
environments using autonomous aerial robots with Size, Weight, and Power (SWaP)
constraints. The SWaP constraints limit mission time, requiring efficient
exploration. We present a novel exploration framework that uses
Deep Learning (DL) to predict the most likely indoor map given the previous
observations, and Deep Reinforcement Learning (DRL) for exploration, designed
to run on modern SWaP-constrained neural processors. The DL-based map predictor
provides a prediction of the occupancy of the unseen environment while the
DRL-based planner determines the best navigation goals that can be safely
reached to provide the most information. The two modules are tightly coupled
and run onboard allowing the vehicle to safely map an unknown environment.
Extensive experimental and simulation results show that our approach surpasses
state-of-the-art methods by 50-60% in efficiency, which we measure by the
fraction of the explored space as a function of the length of the trajectory
traveled.
Comment: Submitted to ICRA2024 for review
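The efficiency metric described above, fraction of explored space as a function of trajectory length, can be computed directly. The helper names below are illustrative, not from the paper:

```python
import math

def trajectory_length(waypoints):
    """Total Euclidean length of a 2-D waypoint sequence."""
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))

def exploration_efficiency(explored_fraction, path_length):
    """Explored-space fraction per unit distance traveled: the quantity
    the abstract uses to compare exploration methods (higher is better)."""
    if path_length <= 0:
        raise ValueError("path length must be positive")
    return explored_fraction / path_length
```

Comparing two methods at the same explored fraction, the one with the shorter trajectory scores higher; a 50-60% improvement corresponds to covering the same space with a substantially shorter flight.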
Large-scale Autonomous Flight with Real-time Semantic SLAM under Dense Forest Canopy
Semantic maps represent the environment using a set of semantically
meaningful objects. This representation is storage-efficient, less ambiguous,
and more informative, thus facilitating large-scale autonomy and the
acquisition of actionable information in highly unstructured, GPS-denied
environments. In this letter, we propose an integrated system that can perform
large-scale autonomous flights and real-time semantic mapping in challenging
under-canopy environments. We detect and model tree trunks and ground planes
from LiDAR data, which are associated across scans and used to constrain robot
poses as well as tree trunk models. The autonomous navigation module utilizes a
multi-level planning and mapping framework and computes dynamically feasible
trajectories that lead the UAV to build a semantic map of the user-defined
region of interest in a computation- and storage-efficient manner. A
drift-compensation mechanism is designed to minimize the odometry drift using
semantic SLAM outputs in real time, while maintaining planner optimality and
controller stability. This leads the UAV to execute its mission accurately and
safely at scale. Code is released at:
https://github.com/KumarRobotics/kr_autonomous_flight and
https://github.com/KumarRobotics/sloam.
Comment: Xu Liu and Guilherme V. Nardari contributed equally to this work
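Associating tree-trunk landmarks across LiDAR scans, as described above, can be illustrated with a greedy nearest-neighbor match on trunk centers. This is a simplified stand-in for the system's actual data-association step; all names and the distance gate are assumptions:

```python
import math

def associate_trunks(prev_trunks, new_detections, max_dist=1.0):
    """Greedily match each new trunk detection (x, y) to the nearest
    unmatched previous landmark within `max_dist`; detections with no
    match become candidate new landmarks. Returns (matches, unmatched),
    where matches are (landmark_index, detection_index) pairs."""
    matches, unmatched, used = [], [], set()
    for j, det in enumerate(new_detections):
        best, best_dist = None, max_dist
        for i, trunk in enumerate(prev_trunks):
            if i in used:
                continue
            d = math.dist(trunk, det)
            if d < best_dist:
                best, best_dist = i, d
        if best is None:
            unmatched.append(j)
        else:
            used.add(best)
            matches.append((best, j))
    return matches, unmatched
```

Matched pairs would feed relative-pose constraints into the SLAM back end, while unmatched detections initialize new trunk models; a real system would use a globally optimal assignment rather than this greedy pass.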